We first load the data from all participants and transform variables into the correct types. We then remove participants who did not solve enough captchas correctly and retain only the relevant variables. We also remove data from two participants for whom, for unknown (likely technical) reasons, not all data was collected.
Next, we load the data from all bets, create the variables of interest, and join them with the participant data. We then set all NAs to zero (participants without betting data are those who skipped betting and therefore have a total bet and win amount of 0). Finally, we create our DV, the proportion of money bet, as follows: \[\texttt{prop_bet} = \frac{\texttt{amount}}{3 + \texttt{total_win}}\]
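Though the analysis itself is in R, the DV computation and the NA handling can be sketched as follows (a Python illustration; the function and argument names are hypothetical and simply mirror the formula above):

```python
def prop_bet(amount, total_win, endowment=3.0):
    """Proportion of available money bet: amount / (endowment + total winnings).

    Participants without betting data skipped betting; their missing bet amount
    and winnings are treated as 0 (the NA-to-zero step described above).
    """
    amount = 0.0 if amount is None else amount
    total_win = 0.0 if total_win is None else total_win
    return amount / (endowment + total_win)

print(prop_bet(None, None))  # skipped betting -> 0.0
print(prop_bet(1.5, 0.0))    # bet half of the 3.0 endowment -> 0.5
```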
The analysis is based on the following numbers of participants (overall and per condition):
## [1] 1500
## # A tibble: 3 x 2
## exp_cond n
## <fct> <int>
## 1 none 499
## 2 white 500
## 3 yellow 501
The mean bonus paid out overall was:
## [1] 2.8
If we split the bonus by whether or not participants played, we see clearly that playing leads to a lower payout:
## # A tibble: 2 x 6
## bet_at_all mean_bonus sd_bonus se_bonus max_bonus min_bonus
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 bet 2.77 2.33 0.0648 30 0
## 2 no bet 3 0 0 3 3
Our DV clearly does not look normally distributed.
## # A tibble: 1 x 3
## gamble_at_all gamble_everything proportion_bet_rest
## <dbl> <dbl> <dbl>
## 1 0.862 0.212 0.442
Binomial confidence (or credible) intervals for the probability of gambling at all:
## method x n mean lower upper
## 1 agresti-coull 1293 1500 0.8620000 0.8435947 0.8785559
## 2 asymptotic 1293 1500 0.8620000 0.8445460 0.8794540
## 3 bayes 1293 1500 0.8617588 0.8441866 0.8790604
## 4 cloglog 1293 1500 0.8620000 0.8435014 0.8784718
## 5 exact 1293 1500 0.8620000 0.8435037 0.8790656
## 6 logit 1293 1500 0.8620000 0.8436017 0.8785455
## 7 probit 1293 1500 0.8620000 0.8437905 0.8787053
## 8 profile 1293 1500 0.8620000 0.8439340 0.8788308
## 9 lrt 1293 1500 0.8620000 0.8439349 0.8788302
## 10 prop.test 1293 1500 0.8620000 0.8432690 0.8788464
## 11 wilson 1293 1500 0.8620000 0.8436191 0.8785315
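As a cross-check, the "wilson" row can be reproduced from the counts alone. The following is a Python sketch of the standard Wilson score interval (the report itself computes all eleven intervals with an R package):

```python
import math

def wilson_ci(x, n):
    """Wilson score 95% interval for a binomial proportion x/n."""
    z = 1.959963985  # 97.5% standard-normal quantile
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(1293, 1500)
print(round(lo, 4), round(hi, 4))  # matches the "wilson" row: 0.8436 0.8785
```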
We use a custom parameterization of a zero-one-inflated beta-regression model (see also here). The likelihood of the model is given by:
\[\begin{align} f(y) &= (1 - g) & & \text{if } y = 0 \\ f(y) &= g \times e & & \text{if } y = 1 \\ f(y) &= g \times (1 - e) \times \text{Beta}(a,b) & & \text{if } y \notin \{0, 1\} \\ a &= \mu \times \phi \\ b &= (1-\mu) \times \phi \end{align}\]
Where \(1 - g\) is the zero-inflation probability (zipp is \(g\) and reflects the probability to gamble), \(e\) is the conditional one-inflation probability (coi), i.e., the conditional probability to gamble everything (the probability of a value of one, given that one gambles), \(\mu\) is the mean of the beta distribution (Intercept), and \(\phi\) is the precision of the beta distribution (phi). As we use Stan for modelling, we need to model on the real line and therefore need appropriate link functions. For \(\phi\) the link is log (inverse is exp()); for all other parameters it is logit (inverse is plogis()).
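A minimal Python sketch of this likelihood (using the symbols g, e, mu, and phi as defined above) may clarify the three branches:

```python
import math

def beta_pdf(y, a, b):
    """Density of the Beta(a, b) distribution at y in (0, 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(y) + (b - 1) * math.log(1 - y))

def zoib_density(y, g, e, mu, phi):
    """Zero-one-inflated beta density in the mean/precision parameterization."""
    if y == 0:
        return 1 - g                      # did not gamble at all
    if y == 1:
        return g * e                      # gambled everything
    a, b = mu * phi, (1 - mu) * phi       # convert to standard Beta parameters
    return g * (1 - e) * beta_pdf(y, a, b)

print(round(zoib_density(0.0, 0.86, 0.2, 0.44, 3.0), 3))  # 1 - g = 0.14
print(round(zoib_density(1.0, 0.86, 0.2, 0.44, 3.0), 3))  # g * e = 0.172
```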
We fit this model and add experimental condition as a factor to the three main model parameters (i.e., only the precision parameter is fixed across conditions). The following table provides an overview of the model and all model parameters and shows good convergence.
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: new_prop ~ exp_cond
## phi ~ 1
## zipp ~ exp_cond
## coi ~ exp_cond
## Data: part2 (Number of observations: 1500)
## Samples: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup samples = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.23 0.05 -0.34 -0.13 1.00 149668 76179
## phi_Intercept 1.11 0.04 1.04 1.19 1.00 188488 78137
## zipp_Intercept 1.96 0.14 1.69 2.23 1.00 140453 74727
## coi_Intercept -1.31 0.12 -1.54 -1.08 1.00 155451 77174
## exp_condwhite 0.08 0.07 -0.06 0.23 1.00 148566 83237
## exp_condyellow 0.15 0.07 0.00 0.29 1.00 145799 84322
## zipp_exp_condwhite -0.19 0.19 -0.55 0.18 1.00 139533 86972
## zipp_exp_condyellow -0.17 0.19 -0.53 0.20 1.00 136425 84864
## coi_exp_condwhite -0.07 0.17 -0.40 0.26 1.00 148168 85135
## coi_exp_condyellow 0.05 0.17 -0.27 0.38 1.00 144756 82084
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
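For interpretation, the posterior-mean intercepts above can be pushed through the inverse links; a Python sketch (plogis mimics R's logistic CDF; the rounded values below closely match the condition-level estimates reported further down):

```python
import math

def plogis(x):
    """Inverse-logit (logistic) function, as in R's plogis()."""
    return 1 / (1 + math.exp(-x))

# Posterior-mean intercepts from the summary above (no message condition)
print(round(plogis(1.96), 2))   # zipp: P(gamble at all), ~0.88
print(round(plogis(-1.31), 2))  # coi:  P(gamble everything | gamble), ~0.21
print(round(plogis(-0.23), 2))  # mu:   mean proportion bet (continuous part), ~0.44
print(round(math.exp(1.11), 2)) # phi:  beta precision, ~3.03
```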
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
We can also plot the three parameters showing the difference distribution of the yellow message condition from the no message condition. These differences are given on the logit scale.
Likewise, we can plot the three parameters showing the difference distribution of the white message condition from the no message condition. These differences are given on the logit scale.
The model does not have any obvious problems, even without priors for the condition-specific effects.
As expected, the synthetic data generated from the model looks a lot like the actual data. This suggests that the model is adequate for the data.
We first give the table showing the posterior means and 95% CIs.
## # A tibble: 9 x 8
## # Groups: parameter [3]
## parameter condition estimate .lower .upper .width .point .interval
## <fct> <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Gamble at all? None 0.875 0.845 0.903 0.95 mean qi
## 2 Gamble at all? White 0.853 0.821 0.883 0.95 mean qi
## 3 Gamble at all? Yellow 0.856 0.824 0.885 0.95 mean qi
## 4 Gamble everything? None 0.213 0.176 0.253 0.95 mean qi
## 5 Gamble everything? White 0.202 0.165 0.241 0.95 mean qi
## 6 Gamble everything? Yellow 0.222 0.184 0.262 0.95 mean qi
## 7 Proportion bet? None 0.442 0.416 0.467 0.95 mean qi
## 8 Proportion bet? White 0.463 0.437 0.488 0.95 mean qi
## 9 Proportion bet? Yellow 0.478 0.452 0.504 0.95 mean qi
For the zero-one inflated components, we can compare the model estimates with the data. Unsurprisingly, they match quite well.
## # A tibble: 3 x 3
## exp_cond gamble_at_all gamble_everything
## <fct> <dbl> <dbl>
## 1 none 0.876 0.213
## 2 white 0.854 0.201
## 3 yellow 0.856 0.221
The following is the main results figure on the level of the message conditions.
We can also focus on the difference distributions from the no message condition (i.e., we do not show the “Yellow versus White” comparison; all comparisons below are against the no message condition).
## # A tibble: 6 x 8
## # Groups: parameter [3]
## parameter condition estimate .lower .upper .width .point .interval
## <fct> <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Gamble at all? White -0.0218 -0.0643 0.0209 0.95 mean qi
## 2 Gamble at all? Yellow -0.0194 -0.0616 0.0227 0.95 mean qi
## 3 Gamble everything? White -0.0114 -0.0652 0.0431 0.95 mean qi
## 4 Gamble everything? Yellow 0.00853 -0.0466 0.0639 0.95 mean qi
## 5 Proportion bet? White 0.0208 -0.0154 0.0568 0.95 mean qi
## 6 Proportion bet? Yellow 0.0364 0.0000330 0.0727 0.95 mean qi
Same as a figure.
For reference, we include below another plot of the difference distributions, this time also including the distribution for the Yellow versus White warning message comparison.
## # A tibble: 9 x 8
## # Groups: parameter [3]
## parameter condition estimate .lower .upper .width .point .interval
## <fct> <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Gamble at all? Yellow - White 0.00233 -0.0415 0.0459 0.95 mean qi
## 2 Gamble at all? White - None -0.0218 -0.0643 0.0209 0.95 mean qi
## 3 Gamble at all? Yellow - None -0.0194 -0.0616 0.0227 0.95 mean qi
## 4 Gamble everything? Yellow - White 0.0199 -0.0350 0.0750 0.95 mean qi
## 5 Gamble everything? White - None -0.0114 -0.0652 0.0431 0.95 mean qi
## 6 Gamble everything? Yellow - None 0.00853 -0.0466 0.0639 0.95 mean qi
## 7 Proportion bet? Yellow - White 0.0156 -0.0212 0.0521 0.95 mean qi
## 8 Proportion bet? White - None 0.0208 -0.0154 0.0568 0.95 mean qi
## 9 Proportion bet? Yellow - None 0.0364 0.0000330 0.0727 0.95 mean qi
The following plot shows the estimated differences from the no message control condition with the overlaid density estimate (in black) and some possible prior distributions in colour (note again that the model did not actually include any priors on these effects). These differences are shown on the unconstrained (logit) scale, before applying the inverse-logit function. The priors are normal priors (which have a higher peak at 0 compared to Cauchy and t distributions) with different SDs.
In this figure we considered three prior widths. For a prior width of SD = 0.25, we expect with 95% probability that the largest effect we observe is 12.01% on the response scale. For a prior width of SD = 0.5, the corresponding largest expected effect is 22.71%, and for a prior width of SD = 1 it is 37.65%.
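These maximal effects follow directly from the prior SDs: 95% of a zero-centred normal lies within ±1.96 SD on the logit scale, which the inverse-logit maps onto the response scale. A Python sketch reproduces the three figures above:

```python
import math

def max_effect(sd, z=1.959963985):
    """Largest effect (vs. chance, in probability units) covered by 95% of a
    zero-centred normal prior with the given SD on the logit scale."""
    return 1 / (1 + math.exp(-z * sd)) - 0.5

for sd in (0.25, 0.5, 1.0):
    print(f"SD = {sd}: {100 * max_effect(sd):.2f}%")
# SD = 0.25: 12.01%, SD = 0.5: 22.71%, SD = 1.0: 37.65%
```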
What the figure shows is that for “gamble at all” the evidence is mostly anecdotal for the null hypothesis of no difference. That is, the posterior density at 0 (i.e., the y-axis position at which the black line crosses the 0 line) is somewhat larger than the prior density at 0 (i.e., the density of the prior curve at 0). However, the ratio of the two densities does not appear to exceed 3. For the narrowest prior (SD = 0.25) the two densities are approximately equal, implying no evidence either way (i.e., neither for the null nor for a difference).
For “gamble everything” all priors suggest evidence for the null hypothesis, although this evidence only appears to exceed the anecdotal threshold of a ratio of 3 for the wide prior (SD = 1).
For “proportion bet”, we see a similar picture for the comparison with the white message (i.e., anecdotal evidence for the null). For the comparison with the yellow message the data appears to provide evidence for the backfire effect. The other two priors again provide only anecdotal evidence for the null.
We now look at the data but aggregate both gambling message conditions into one (i.e., we only have no message versus message).
## # A tibble: 6 x 8
## # Groups: parameter [3]
## parameter condition estimate .lower .upper .width .point .interval
## <fct> <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Gamble at all? control 0.875 0.845 0.903 0.95 mean qi
## 2 Gamble at all? warning 0.855 0.833 0.876 0.95 mean qi
## 3 Gamble everything? control 0.213 0.176 0.253 0.95 mean qi
## 4 Gamble everything? warning 0.211 0.185 0.239 0.95 mean qi
## 5 Proportion bet? control 0.442 0.416 0.467 0.95 mean qi
## 6 Proportion bet? warning 0.470 0.452 0.489 0.95 mean qi
## # A tibble: 9 x 8
## # Groups: parameter [3]
## parameter condition estimate .lower .upper .width .point .interval
## <fct> <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Gamble at all? Yellow - White 0.00233 -0.0415 0.0459 0.95 mean qi
## 2 Gamble at all? White - None -0.0218 -0.0643 0.0209 0.95 mean qi
## 3 Gamble at all? Yellow - None -0.0194 -0.0616 0.0227 0.95 mean qi
## 4 Gamble everything? Yellow - White 0.0199 -0.0350 0.0750 0.95 mean qi
## 5 Gamble everything? White - None -0.0114 -0.0652 0.0431 0.95 mean qi
## 6 Gamble everything? Yellow - None 0.00853 -0.0466 0.0639 0.95 mean qi
## 7 Proportion bet? Yellow - White 0.0156 -0.0212 0.0521 0.95 mean qi
## 8 Proportion bet? White - None 0.0208 -0.0154 0.0568 0.95 mean qi
## 9 Proportion bet? Yellow - None 0.0364 0.0000330 0.0727 0.95 mean qi
The covariates do not differ between conditions (strong BF evidence for the null).
## Bayes factor analysis
## --------------
## [1] Intercept only : 47.38753 ±0.03%
##
## Against denominator:
## pgsi ~ exp_cond
## ---
## Bayes factor type: BFlinearModel, JZS
## Bayes factor analysis
## --------------
## [1] Intercept only : 32.31008 ±0.03%
##
## Against denominator:
## motives ~ exp_cond
## ---
## Bayes factor type: BFlinearModel, JZS
The following table shows means and SDs of the covariates per group.
## # A tibble: 3 x 5
## exp_cond pgsi_mean pgsi_sd motives_mean motives_sd
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 none 3.03 4.83 6.01 3.91
## 2 white 2.66 4.30 6.27 3.98
## 3 yellow 2.70 4.40 5.86 3.89
The following plot shows the relationships of the covariates among each other and with proportion bet.
The figures show some relationships, which can also be seen when just looking at the correlations.
##
## Pearson's product-moment correlation
##
## data: part2$pgsi and part2$motives
## t = 16.878, df = 1498, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.3563292 0.4414162
## sample estimates:
## cor
## 0.3997335
##
## Pearson's product-moment correlation
##
## data: part2$pgsi and part2$new_prop
## t = 5.6223, df = 1498, p-value = 2.244e-08
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.09382388 0.19296413
## sample estimates:
## cor
## 0.1437547
##
## Pearson's product-moment correlation
##
## data: part2$motives and part2$new_prop
## t = 8.1985, df = 1498, p-value = 5.173e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.1582745 0.2551650
## sample estimates:
## cor
## 0.2072279
The figure below provides an alternative visualisation of the relationships between covariates and betting behaviour. In particular, participants were categorized into one of three betting behaviour groups: participants who did not bet at all (“none”, 14% of participants), participants who bet “some” of their money (68% of participants), and participants who bet “all” of their money (18%). For both gambling scales we see a positive relationship between betting behaviour group and gambling score: participants who bet more have, on average, higher scores on the two gambling scales.
##
## none some all
## 0.1380000 0.6793333 0.1826667
This is supported by Bayesian ANOVAs with Bayes factors of over 400,000 for the effect of betting behaviour group on the gambling scale scores. However, this effect was not moderated by gambling message condition. In particular, there was evidence for the absence of both a main effect of gambling message condition and an interaction of gambling message condition with betting behaviour group for both gambling scale scores (Bayes factors for the null > 25).
## Bayes factor analysis
## --------------
## [1] exp_cond : 0.0211026 ±0.03%
## [2] gamble_cat : 474496 ±0.03%
## [3] exp_cond + gamble_cat : 8985.538 ±0.94%
## [4] exp_cond + gamble_cat + exp_cond:gamble_cat : 43.04883 ±4.8%
##
## Against denominator:
## Intercept only
## ---
## Bayes factor type: BFlinearModel, JZS
## denominator
## numerator exp_cond gamble_cat exp_cond + gamble_cat exp_cond + gamble_cat + exp_cond:gamble_cat
## gamble_cat 22485192 1 52.80663 11022.27
## Bayes factor analysis
## --------------
## [1] exp_cond : 0.0309501 ±0.03%
## [2] gamble_cat : 1843612186 ±0.03%
## [3] exp_cond + gamble_cat : 74934793 ±3.65%
## [4] exp_cond + gamble_cat + exp_cond:gamble_cat : 1035100 ±1.87%
##
## Against denominator:
## Intercept only
## ---
## Bayes factor type: BFlinearModel, JZS
## denominator
## numerator exp_cond gamble_cat exp_cond + gamble_cat exp_cond + gamble_cat + exp_cond:gamble_cat
## gamble_cat 59567253708 1 24.60289 1781.096
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: new_prop ~ exp_cond + pgsi_c + motives_c
## phi ~ 1
## zipp ~ exp_cond + pgsi_c + motives_c
## coi ~ exp_cond + pgsi_c + motives_c
## Data: part2 (Number of observations: 1500)
## Samples: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup samples = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.23 0.05 -0.34 -0.13 1.00 137264 79419
## phi_Intercept 1.12 0.04 1.04 1.20 1.00 195173 76707
## zipp_Intercept 2.08 0.14 1.80 2.36 1.00 130576 80893
## coi_Intercept -1.37 0.12 -1.60 -1.14 1.00 137274 80350
## exp_condwhite 0.08 0.07 -0.07 0.23 1.00 134442 88120
## exp_condyellow 0.15 0.07 0.00 0.29 1.00 132740 86367
## pgsi_c 0.01 0.01 -0.01 0.02 1.00 169765 84376
## motives_c 0.02 0.01 0.00 0.04 1.00 171755 83729
## zipp_exp_condwhite -0.23 0.19 -0.60 0.14 1.00 136926 88963
## zipp_exp_condyellow -0.16 0.19 -0.53 0.21 1.00 136948 86754
## zipp_pgsi_c -0.01 0.02 -0.05 0.03 1.00 157548 78763
## zipp_motives_c 0.15 0.02 0.10 0.19 1.00 138593 83935
## coi_exp_condwhite -0.05 0.17 -0.38 0.29 1.00 137691 86016
## coi_exp_condyellow 0.07 0.17 -0.25 0.40 1.00 138616 86998
## coi_pgsi_c 0.07 0.01 0.04 0.09 1.00 162048 83318
## coi_motives_c 0.02 0.02 -0.02 0.05 1.00 156239 84186
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
We can also plot the three parameters showing the difference distribution of the yellow message condition from the no message condition. These differences are given on the logit scale.
Likewise, we can plot the three parameters showing the difference distribution of the white message condition from the no message condition. These differences are given on the logit scale.
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: new_prop ~ exp_cond + pgsi_c
## phi ~ 1
## zipp ~ exp_cond + pgsi_c
## coi ~ exp_cond + pgsi_c
## Data: part2 (Number of observations: 1500)
## Samples: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup samples = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.23 0.05 -0.34 -0.13 1.00 159391 80938
## phi_Intercept 1.11 0.04 1.04 1.19 1.00 204131 73394
## zipp_Intercept 1.96 0.14 1.70 2.24 1.00 145598 77608
## coi_Intercept -1.36 0.12 -1.60 -1.13 1.00 161466 77966
## exp_condwhite 0.09 0.07 -0.06 0.23 1.00 151499 87134
## exp_condyellow 0.15 0.07 0.01 0.30 1.00 153559 86129
## pgsi_c 0.02 0.01 0.00 0.03 1.00 228838 75634
## zipp_exp_condwhite -0.18 0.19 -0.54 0.19 1.00 139729 84946
## zipp_exp_condyellow -0.16 0.19 -0.53 0.21 1.00 142958 83545
## zipp_pgsi_c 0.04 0.02 0.00 0.08 1.00 209765 76834
## coi_exp_condwhite -0.04 0.17 -0.37 0.29 1.00 153706 84801
## coi_exp_condyellow 0.07 0.17 -0.26 0.40 1.00 154574 84384
## coi_pgsi_c 0.07 0.01 0.04 0.10 1.00 203389 75680
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
We can also plot the three parameters showing the difference distribution of the yellow message condition from the no message condition. These differences are given on the logit scale.
Likewise, we can plot the three parameters showing the difference distribution of the white message condition from the no message condition. These differences are given on the logit scale.
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: new_prop ~ exp_cond + motives_c
## phi ~ 1
## zipp ~ exp_cond + motives_c
## coi ~ exp_cond + motives_c
## Data: part2 (Number of observations: 1500)
## Samples: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup samples = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.23 0.05 -0.34 -0.13 1.00 161830 79134
## phi_Intercept 1.12 0.04 1.04 1.20 1.00 207137 73950
## zipp_Intercept 2.07 0.14 1.80 2.36 1.00 148292 78001
## coi_Intercept -1.33 0.12 -1.57 -1.11 1.00 162141 79521
## exp_condwhite 0.08 0.07 -0.07 0.22 1.00 151764 84823
## exp_condyellow 0.14 0.07 -0.00 0.29 1.00 153604 86860
## motives_c 0.02 0.01 0.01 0.04 1.00 220333 75273
## zipp_exp_condwhite -0.22 0.19 -0.60 0.15 1.00 146356 85015
## zipp_exp_condyellow -0.15 0.19 -0.53 0.22 1.00 146991 84726
## zipp_motives_c 0.14 0.02 0.10 0.18 1.00 178265 83096
## coi_exp_condwhite -0.08 0.17 -0.41 0.25 1.00 152219 84658
## coi_exp_condyellow 0.06 0.17 -0.27 0.38 1.00 153589 87209
## coi_motives_c 0.05 0.02 0.02 0.08 1.00 209624 75764
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
We can also plot the three parameters showing the difference distribution of the yellow message condition from the no message condition. These differences are given on the logit scale.
Likewise, we can plot the three parameters showing the difference distribution of the white message condition from the no message condition. These differences are given on the logit scale.
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: new_prop ~ exp_cond * (pgsi_c + motives_c)
## phi ~ 1
## zipp ~ exp_cond * (pgsi_c + motives_c)
## coi ~ exp_cond * (pgsi_c + motives_c)
## Data: part2 (Number of observations: 1500)
## Samples: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup samples = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.24 0.05 -0.34 -0.13 1.00 158342 76097
## phi_Intercept 1.13 0.04 1.05 1.20 1.00 184954 74433
## zipp_Intercept 2.16 0.16 1.85 2.49 1.00 86130 70226
## coi_Intercept -1.38 0.12 -1.63 -1.15 1.00 134985 78058
## exp_condwhite 0.09 0.08 -0.06 0.24 1.00 145747 81616
## exp_condyellow 0.16 0.08 0.01 0.30 1.00 148752 84007
## pgsi_c -0.01 0.01 -0.04 0.01 1.00 84979 79829
## motives_c 0.04 0.01 0.01 0.07 1.00 79388 78932
## exp_condwhite:pgsi_c 0.04 0.02 0.00 0.08 1.00 99695 81421
## exp_condyellow:pgsi_c 0.03 0.02 -0.00 0.07 1.00 98713 83957
## exp_condwhite:motives_c -0.02 0.02 -0.06 0.02 1.00 87967 83521
## exp_condyellow:motives_c -0.04 0.02 -0.08 0.00 1.00 89525 82219
## zipp_exp_condwhite -0.35 0.21 -0.76 0.05 1.00 98841 80318
## zipp_exp_condyellow -0.17 0.22 -0.61 0.26 1.00 95718 78198
## zipp_pgsi_c -0.03 0.03 -0.09 0.03 1.00 76351 74195
## zipp_motives_c 0.19 0.05 0.11 0.29 1.00 61798 66661
## zipp_exp_condwhite:pgsi_c 0.04 0.05 -0.06 0.13 1.00 91370 79221
## zipp_exp_condyellow:pgsi_c 0.04 0.05 -0.06 0.14 1.00 91206 75879
## zipp_exp_condwhite:motives_c -0.10 0.06 -0.22 0.01 1.00 69073 75699
## zipp_exp_condyellow:motives_c -0.02 0.06 -0.14 0.10 1.00 69843 76046
## coi_exp_condwhite -0.03 0.18 -0.37 0.31 1.00 133825 82024
## coi_exp_condyellow 0.08 0.17 -0.25 0.42 1.00 132528 81794
## coi_pgsi_c 0.05 0.02 0.00 0.09 1.00 84270 81113
## coi_motives_c 0.06 0.03 -0.01 0.12 1.00 75499 77397
## coi_exp_condwhite:pgsi_c 0.01 0.04 -0.06 0.08 1.00 96565 82520
## coi_exp_condyellow:pgsi_c 0.05 0.04 -0.02 0.12 1.00 93175 83688
## coi_exp_condwhite:motives_c -0.04 0.05 -0.13 0.05 1.00 84410 79773
## coi_exp_condyellow:motives_c -0.09 0.05 -0.18 0.00 1.00 85118 81688
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
We can also plot the three parameters showing the difference distribution of the yellow message condition from the no message condition. These differences are given on the logit scale.
Likewise, we can plot the three parameters showing the difference distribution of the white message condition from the no message condition. These differences are given on the logit scale.
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: new_prop ~ exp_cond * pgsi_c
## phi ~ 1
## zipp ~ exp_cond * pgsi_c
## coi ~ exp_cond * pgsi_c
## Data: part2 (Number of observations: 1500)
## Samples: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup samples = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.24 0.05 -0.34 -0.13 1.00 111165 81052
## phi_Intercept 1.12 0.04 1.04 1.19 1.00 126304 75115
## zipp_Intercept 1.96 0.14 1.70 2.24 1.00 98317 73486
## coi_Intercept -1.36 0.12 -1.60 -1.12 1.00 100580 75698
## exp_condwhite 0.10 0.07 -0.05 0.25 1.00 111493 83929
## exp_condyellow 0.16 0.07 0.01 0.30 1.00 111877 83332
## pgsi_c -0.00 0.01 -0.03 0.02 1.00 82005 74673
## exp_condwhite:pgsi_c 0.04 0.02 0.00 0.08 1.00 91100 78006
## exp_condyellow:pgsi_c 0.02 0.02 -0.01 0.06 1.00 91600 79265
## zipp_exp_condwhite -0.17 0.19 -0.54 0.20 1.00 101519 80722
## zipp_exp_condyellow -0.13 0.19 -0.50 0.25 1.00 101518 79014
## zipp_pgsi_c 0.03 0.03 -0.03 0.09 1.00 77116 65364
## zipp_exp_condwhite:pgsi_c 0.02 0.05 -0.08 0.11 1.00 81445 71879
## zipp_exp_condyellow:pgsi_c 0.05 0.05 -0.05 0.15 1.00 85836 75392
## coi_exp_condwhite -0.04 0.17 -0.38 0.29 1.00 108607 81856
## coi_exp_condyellow 0.06 0.17 -0.28 0.39 1.00 107582 83322
## coi_pgsi_c 0.06 0.02 0.02 0.11 1.00 78741 74129
## coi_exp_condwhite:pgsi_c -0.00 0.03 -0.07 0.06 1.00 90232 78435
## coi_exp_condyellow:pgsi_c 0.02 0.03 -0.04 0.08 1.00 88966 82151
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
We can also plot the three parameters showing the difference distribution of the yellow message condition from the no message condition. These differences are given on the logit scale.
Likewise, we can plot the three parameters showing the difference distribution of the white message condition from the no message condition. These differences are given on the logit scale.
## Family: zoib2
## Links: mu = logit; phi = log; zipp = logit; coi = logit
## Formula: new_prop ~ exp_cond * motives_c
## phi ~ 1
## zipp ~ exp_cond * motives_c
## coi ~ exp_cond * motives_c
## Data: part2 (Number of observations: 1500)
## Samples: 4 chains, each with iter = 26000; warmup = 1000; thin = 1;
## total post-warmup samples = 1e+05
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.23 0.05 -0.34 -0.13 1.00 134148 72302
## phi_Intercept 1.12 0.04 1.04 1.20 1.00 159140 72579
## zipp_Intercept 2.13 0.16 1.83 2.46 1.00 92991 71805
## coi_Intercept -1.36 0.12 -1.60 -1.13 1.00 124136 74903
## exp_condwhite 0.08 0.07 -0.07 0.22 1.00 129882 79501
## exp_condyellow 0.15 0.07 -0.00 0.29 1.00 132075 78427
## motives_c 0.03 0.01 0.01 0.06 1.00 94948 77947
## exp_condwhite:motives_c -0.00 0.02 -0.04 0.04 1.00 102110 82095
## exp_condyellow:motives_c -0.03 0.02 -0.06 0.01 1.00 102118 83266
## zipp_exp_condwhite -0.33 0.21 -0.74 0.07 1.00 100130 79667
## zipp_exp_condyellow -0.16 0.22 -0.59 0.27 1.00 98266 80934
## zipp_motives_c 0.18 0.04 0.10 0.26 1.00 75550 70054
## zipp_exp_condwhite:motives_c -0.08 0.05 -0.19 0.02 1.00 84123 77488
## zipp_exp_condyellow:motives_c -0.00 0.06 -0.11 0.11 1.00 82599 76083
## coi_exp_condwhite -0.05 0.17 -0.39 0.29 1.00 120755 78000
## coi_exp_condyellow 0.09 0.17 -0.24 0.43 1.00 123588 80617
## coi_motives_c 0.08 0.03 0.02 0.14 1.00 90171 76917
## coi_exp_condwhite:motives_c -0.03 0.04 -0.12 0.05 1.00 97615 81334
## coi_exp_condyellow:motives_c -0.06 0.04 -0.14 0.02 1.00 97614 79295
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
As a visual convergence check, we plot the density and trace plots for the four intercept parameters representing the no message condition or the overall mean (for phi).
We can also plot the three parameters showing the difference distribution of the yellow message condition from the no message condition. These differences are given on the logit scale.
Likewise, we can plot the three parameters showing the difference distribution of the white message condition from the no message condition. These differences are given on the logit scale.
Does either of the warning labels affect the riskiness of bets chosen in the roulette? Different bets come with different potential payoffs in roulette. If £1 is bet, this can yield a total payoff of between £2 (e.g., bets on red or black) and £36 (bets on a single number). These numbers, 2 and 36, are the decimal odds for these two bets, representing the total potential payoff. Additionally, gamblers can place multiple bets per spin (e.g., betting £0.50 on red and £0.50 on 7). Since the number 7 is red, the bet on 7 can only win if the bet on red also wins. Multiple bets per spin can thus be placed in a way that accentuates risk, as in this example, or in a way that hedges risk (for example, a bet on black added to the bet on 7, or betting on reds and blacks together). The purpose of this exploratory analysis is to see whether either warning label affects risk taking.
Following our pre-registration, we measured the amount of risk taken by looking at the variance of the decimal odds for each number on the roulette table. This measures the concentration of the bet, with more risk represented by more concentrated bets (e.g., betting on a single number) and lower risk represented by more spread-out bets (e.g., betting on reds, betting on evens). For example, if a participant places a bet on number 7, the decimal odds for that number are 36 (36 times the amount bet if the roulette stops on the number 7). If a participant places a bet on red, then every red number on the table gets decimal odds of 2 (the participant wins twice the amount bet if the roulette stops on any red number). To calculate the proposed risk variable, we also assign decimal odds of zero to the numbers on which participants did not bet (since they win zero times the amount bet if the roulette stops on those numbers). In the two examples above, every number except 7 is assigned zero, and every non-red number is assigned zero, respectively. The risk variable is the variance of the array of decimal odds for every number on the roulette table, taking into account the bets the participant has placed and including zeroes for non-winning numbers. The value of this risk variable can range between 0.03 and 35.03. The value cannot be zero in our task because participants are not able to bet the same amount across all numbers including the zero, due to the limitations in the values of the available tokens and total bet amounts. Higher values indicate more risk taking (with the highest, 35.03, corresponding to concentrating the bet on a single number). Lower values indicate lower risk taking (with the lowest, 0.03, corresponding to spreading the bet across every red and black number, excluding zero).
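The two extreme values, 0.03 and 35.03, can be checked directly: assign each of the 37 numbers (0–36) its decimal odds given the bets, and take the sample variance of that array. A Python sketch, under the assumption that the per-number decimal odds reflect the total payoff relative to the total amount bet:

```python
def risk(odds):
    """Sample variance of the per-number decimal odds (37 numbers: 0-36)."""
    n = len(odds)
    mean = sum(odds) / n
    return sum((x - mean) ** 2 for x in odds) / (n - 1)

# Maximum risk: the whole bet on a single number (decimal odds 36 there, 0 elsewhere)
single = [36] + [0] * 36
# Minimum risk: the bet split evenly over red and black (total payoff equals the
# stake for every number except zero, i.e. odds 1 on 36 numbers)
spread = [1] * 36 + [0]

print(round(risk(single), 2))  # 35.03
print(round(risk(spread), 2))  # 0.03
```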
The following plot shows the distribution of this variable after aggregating within participants. The left plot shows the original variable on the scale from 0.03 to 35.03. The middle plot shows the variable after dividing by 36 so that the variable ranges from just above 0 to just below 1. The right plot shows the variable after subtracting 0.03 and dividing by 35 so that the smallest value is mapped to 0 and the largest value is mapped to 1.
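The two rescalings in the middle and right panels are simple affine maps; a Python sketch (function names are hypothetical):

```python
def to_unit_open(v):
    """Middle panel: divide by 36 so values fall strictly inside (0, 1)."""
    return v / 36

def to_unit_closed(v):
    """Right panel: map the observed range [0.03, 35.03] onto [0, 1]."""
    return (v - 0.03) / 35

# The observed extremes map to (approximately) 0 and 1
print(round(to_unit_closed(0.03), 6), round(to_unit_closed(35.03), 6))
```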
We first analyse the data with a beta-regression model. For this, the data need to lie strictly between 0 and 1, excluding exactly 0 and 1. Consequently, we use the transformation shown above in the middle panel. As before, we fit the data and allow for an effect of gambling message.
The model does not show any obvious problems. In addition, we can see that the 95%-CIs for the gambling message specific effects both include 0.
## Family: beta
## Links: mu = logit; phi = log
## Formula: average_odds2 ~ exp_cond
## phi ~ 1
## Data: bets_av (Number of observations: 1293)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -2.09 0.05 -2.18 -1.99 1.00 2755 2690
## phi_Intercept 1.77 0.04 1.68 1.85 1.00 2850 2695
## exp_condwhite -0.03 0.06 -0.16 0.10 1.00 2926 2830
## exp_condyellow -0.08 0.06 -0.21 0.04 1.00 2952 2955
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
The posterior predictive distribution shows some problems, but at least the shape of the synthetic data is similar to the distribution of the actual data (the posterior predictive distribution of a model assuming a Gaussian response distribution was considerably worse and is therefore not included here).
The following tables show the average riskiness per warning label and the differences from the no-message group on the fitted scale, i.e., the (0, 1) scale used for the beta regression.
## # A tibble: 3 x 7
## condition estimate .lower .upper .width .point .interval
## <chr> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 None 0.110 0.101 0.120 0.95 mean qi
## 2 White 0.107 0.0978 0.117 0.95 mean qi
## 3 Yellow 0.102 0.0935 0.111 0.95 mean qi
## # A tibble: 2 x 7
## condition estimate .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 White -0.00299 -0.0151 0.00943 0.95 mean qi
## 2 Yellow -0.00800 -0.0200 0.00368 0.95 mean qi
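As a rough check, the condition means in the first table can be recovered from the posterior-mean coefficients reported above by applying the inverse-logit; small discrepancies arise because the tables summarise transformed posteriors rather than transforming point estimates:

```r
# Point estimates from the beta-regression summary (logit scale)
b_intercept <- -2.09  # no-message condition
b_white     <- -0.03
b_yellow    <- -0.08

round(plogis(b_intercept), 3)             # ~0.110 (None)
round(plogis(b_intercept + b_white), 3)   # ~0.107 (White)
round(plogis(b_intercept + b_yellow), 3)  # ~0.102 (Yellow)
```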
The following plot shows the distribution of the posteriors (left) and the differences from the no-message group (right). Both variables are back-transformed to the original scale from 0.03 to 35.03.
As is clear from the plots, there is no evidence for a difference. Furthermore, there is some evidence for no difference for the white message, but only ambiguous evidence for the yellow message.
We also attempt to fit a zero-one inflated beta regression model to the data transformed onto the [0, 1] scale (i.e., right panel in the histograms above). Note that this model uses the brms default parameterisation for ZOIBR models and not the custom ones used above.
This model also does not show any effect of message condition, as none of the effects involving message condition (i.e., variable names containing exp) excludes 0 from its 95% CI.
## Family: zero_one_inflated_beta
## Links: mu = logit; phi = log; zoi = logit; coi = logit
## Formula: average_odds3 ~ exp_cond
## phi ~ 1
## zoi ~ exp_cond
## coi ~ exp_cond
## Data: bets_av (Number of observations: 1293)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -2.39 0.04 -2.47 -2.30 1.00 3714 2675
## phi_Intercept 2.37 0.04 2.29 2.46 1.00 3693 3068
## zoi_Intercept -4.14 0.39 -4.95 -3.43 1.00 3753 2718
## coi_Intercept 0.86 0.88 -0.76 2.76 1.00 4633 3307
## exp_condwhite -0.05 0.06 -0.17 0.06 1.00 3925 2914
## exp_condyellow -0.09 0.06 -0.20 0.02 1.00 4056 2970
## zoi_exp_condwhite -0.32 0.61 -1.57 0.83 1.00 3441 2839
## zoi_exp_condyellow 0.04 0.54 -1.04 1.08 1.00 3654 2258
## coi_exp_condwhite 4.39 3.48 -0.57 12.88 1.00 2117 1458
## coi_exp_condyellow -0.69 1.19 -3.10 1.60 1.00 4548 2439
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
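In the brms default parameterisation, zoi is the probability that an observation is exactly 0 or 1, and coi is the conditional probability that such an extreme observation is a 1; both are modelled on the logit scale. Back-transforming the intercepts from the summary above gives a feel for how rare the extremes are in the no-message group:

```r
# zoi: probability of an observation being exactly 0 or 1
round(plogis(-4.14), 3)  # ~0.016, i.e. under 2% extreme observations
# coi: probability that an extreme observation is a 1 rather than a 0
round(plogis(0.86), 3)   # ~0.703
```

The very small zoi estimate is consistent with the wide CIs on the zoi and coi condition effects: with so few extreme observations, these parameters are only weakly constrained by the data.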
Furthermore, the posterior predictive distribution is not actually better than that of the simpler model. Hence, we prefer the beta regression model above.
## R version 4.1.0 (2021-05-18)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 18.04.5 LTS
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1
##
## locale:
## [1] LC_CTYPE=en_GB.UTF-8 LC_NUMERIC=C LC_TIME=en_GB.UTF-8
## [4] LC_COLLATE=en_GB.UTF-8 LC_MONETARY=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8
## [7] LC_PAPER=en_GB.UTF-8 LC_NAME=C LC_ADDRESS=C
## [10] LC_TELEPHONE=C LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] binom_1.1-1 BayesFactor_0.9.12-4.2 Matrix_1.3-3 coda_0.19-3
## [5] tidybayes_2.1.1 brms_2.13.0 Rcpp_1.0.4.6 forcats_0.5.0
## [9] stringr_1.4.0 dplyr_1.0.0 purrr_0.3.4 readr_1.3.1
## [13] tidyr_1.1.0 tibble_3.0.1 ggplot2_3.3.2 tidyverse_1.3.0
## [17] checkpoint_1.0.0
##
## loaded via a namespace (and not attached):
## [1] colorspace_1.4-1 ellipsis_0.3.1 ggridges_0.5.2 rsconnect_0.8.16
## [5] estimability_1.3 markdown_1.1 base64enc_0.1-3 fs_1.4.2
## [9] rstudioapi_0.11 farver_2.0.3 rstan_2.19.3 MatrixModels_0.4-1
## [13] svUnit_1.0.3 DT_0.14 fansi_0.4.1 mvtnorm_1.1-1
## [17] lubridate_1.7.9 xml2_1.3.2 splines_4.1.0 bridgesampling_1.0-0
## [21] knitr_1.36 shinythemes_1.1.2 bayesplot_1.7.2 jsonlite_1.7.2
## [25] broom_0.5.6 dbplyr_1.4.4 ggdist_2.1.1 shiny_1.5.0
## [29] compiler_4.1.0 httr_1.4.1 emmeans_1.4.8 backports_1.1.8
## [33] assertthat_0.2.1 fastmap_1.1.0 cli_2.0.2 later_1.1.0.1
## [37] htmltools_0.5.2 prettyunits_1.1.1 tools_4.1.0 igraph_1.2.5
## [41] gtable_0.3.0 glue_1.4.1 reshape2_1.4.4 cellranger_1.1.0
## [45] jquerylib_0.1.4 vctrs_0.3.1 nlme_3.1-152 crosstalk_1.1.0.1
## [49] xfun_0.26 ps_1.3.3 rvest_0.3.5 mime_0.9
## [53] miniUI_0.1.1.1 lifecycle_0.2.0 gtools_3.8.2 zoo_1.8-8
## [57] scales_1.1.1 colourpicker_1.0 hms_0.5.3 promises_1.1.1
## [61] Brobdingnag_1.2-6 parallel_4.1.0 inline_0.3.15 shinystan_2.5.0
## [65] yaml_2.2.1 pbapply_1.4-2 gridExtra_2.3 loo_2.2.0
## [69] StanHeaders_2.21.0-5 sass_0.4.0 stringi_1.7.5 highr_0.8
## [73] dygraphs_1.1.1.6 pkgbuild_1.0.8 rlang_0.4.11 pkgconfig_2.0.3
## [77] matrixStats_0.56.0 evaluate_0.14 lattice_0.20-44 labeling_0.3
## [81] rstantools_2.0.0 htmlwidgets_1.5.1 cowplot_1.0.0 tidyselect_1.1.0
## [85] processx_3.4.2 ggsci_2.9 plyr_1.8.6 magrittr_2.0.1
## [89] R6_2.5.1 generics_0.0.2 DBI_1.1.0 mgcv_1.8-35
## [93] pillar_1.4.4 haven_2.3.1 withr_2.2.0 xts_0.12-0
## [97] abind_1.4-5 modelr_0.1.8 crayon_1.3.4 arrayhelpers_1.1-0
## [101] utf8_1.1.4 rmarkdown_2.11 grid_4.1.0 readxl_1.3.1
## [105] blob_1.2.1 callr_3.4.3 threejs_0.3.3 reprex_0.3.0
## [109] digest_0.6.28 xtable_1.8-4 httpuv_1.5.4 RcppParallel_5.0.2
## [113] stats4_4.1.0 munsell_0.5.0 bslib_0.3.1 shinyjs_1.1